STRUDEL: Self-training with Uncertainty Dependent Label Refinement Across Domains

Authors

Abstract

We propose an unsupervised domain adaptation (UDA) approach for white matter hyperintensity (WMH) segmentation, which uses Self-TRaining with Uncertainty DEpendent Label refinement (STRUDEL). Self-training has recently been introduced as a highly effective method for UDA, which is based on self-generated pseudo labels. However, pseudo labels can be very noisy and therefore deteriorate model performance. We propose to predict the uncertainty of the pseudo labels and to integrate it into the training process with an uncertainty-guided loss function that highlights labels with high certainty. STRUDEL is further improved by incorporating the segmentation output of an existing method into the pseudo label generation, which has shown high robustness for WMH segmentation. In our experiments, we evaluate STRUDEL with a standard U-Net and a modified network with a higher receptive field. Our results on WMH segmentation across datasets demonstrate a significant improvement of STRUDEL with respect to standard self-training.
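As a rough illustration of the uncertainty-guided loss described in the abstract, the PyTorch sketch below weights a per-pixel loss on pseudo labels by an uncertainty estimate obtained from Monte Carlo dropout. The function names, the choice of binary cross-entropy, and the use of dropout variance as the uncertainty measure are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def mc_dropout_uncertainty(model, x, n_samples=10):
    """Estimate per-pixel uncertainty as the variance of sigmoid outputs
    over several stochastic forward passes with dropout kept active
    (an assumed uncertainty measure, not necessarily the paper's)."""
    model.train()  # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.var(dim=0)


def uncertainty_weighted_loss(logits, pseudo_labels, uncertainty, eps=1e-8):
    """Per-pixel binary cross-entropy on self-generated pseudo labels,
    down-weighted where the uncertainty estimate is high."""
    per_pixel = F.binary_cross_entropy_with_logits(
        logits, pseudo_labels.float(), reduction="none")
    weights = 1.0 - uncertainty  # trust low-uncertainty pixels more
    return (weights * per_pixel).sum() / (weights.sum() + eps)
```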


Similar resources

Algorithms for Propagating Uncertainty Across Heterogeneous Domains

We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction but also with stochastic/det...
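The entry above is only a teaser, but its core setting, two coupled random fields with very different correlation lengths, can be sketched with a brute-force Monte Carlo coupling in NumPy. Everything below (the squared-exponential covariance, the single shared interface point, the conditioning step) is an illustrative assumption rather than that paper's algorithm.

```python
import numpy as np


def sq_exp_cov(x, ell):
    """Squared-exponential covariance with correlation length ell."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)


def coupled_field_statistics(n_samples=1000, n_pts=64, ell_left=1.0, ell_right=0.05):
    """Monte Carlo sketch: two 1-D Gaussian random fields with very different
    correlation lengths, coupled by conditioning the right field on the value
    of the left field at the shared interface point x = 1."""
    rng = np.random.default_rng(0)
    x_left = np.linspace(0.0, 1.0, n_pts)
    x_right = np.linspace(1.0, 2.0, n_pts)
    jitter = 1e-9 * np.eye(n_pts)

    L_left = np.linalg.cholesky(sq_exp_cov(x_left, ell_left) + jitter)
    C_right = sq_exp_cov(x_right, ell_right) + jitter

    # Gaussian conditioning of the right field on its first grid point,
    # which is pinned to the interface value of the left field.
    k0 = C_right[:, 0] / C_right[0, 0]
    L_cond = np.linalg.cholesky(C_right - np.outer(k0, C_right[0, :]) + jitter)

    samples = []
    for _ in range(n_samples):
        u_left = L_left @ rng.standard_normal(n_pts)
        u_right = k0 * u_left[-1] + L_cond @ rng.standard_normal(n_pts)
        samples.append(np.concatenate([u_left, u_right]))

    samples = np.asarray(samples)
    return samples.mean(axis=0), samples.std(axis=0)
```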


Label Efficient Learning of Transferable Representations across Domains and Tasks

We propose a framework that learns a representation transferable across different domains and tasks in a label efficient manner. Our approach battles domain shift with a domain adversarial loss, and generalizes the embedding to a novel task using a metric learning-based approach. Our model is simultaneously optimized on labeled source data and unlabeled or sparsely labeled data in the target doma...
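A domain adversarial loss of the kind mentioned above is commonly implemented with a gradient-reversal layer; the PyTorch sketch below shows that general pattern. The module names and the binary source/target labelling are assumptions for illustration, not that paper's code.

```python
import torch
from torch import nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the
    backward pass, so the feature extractor learns to confuse the domain
    classifier (the standard gradient-reversal trick)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def domain_adversarial_loss(features_src, features_tgt, domain_clf, lam=1.0):
    """Binary domain classification loss on reversed features:
    source batches are labelled 0, target batches 1."""
    feats = torch.cat([features_src, features_tgt], dim=0)
    feats = GradReverse.apply(feats, lam)
    logits = domain_clf(feats).squeeze(-1)
    labels = torch.cat([torch.zeros(len(features_src)),
                        torch.ones(len(features_tgt))]).to(logits.device)
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)
```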


Self-localization Using Visual Experience Across Domains

In this study, we aim to solve the single-view robot self-localization problem by using visual experience across domains. Although the bag-of-words method constitutes a popular approach to single-view localization, it fails badly when its visual vocabulary is learned and tested in different domains. Further, we are interested in using a cross-domain setting, in which the visual vocabulary is...
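The bag-of-words pipeline referred to above can be sketched as: cluster local descriptors into a visual vocabulary, describe each image as a word histogram, and localize a query by nearest histogram. The sketch below assumes precomputed local descriptors (e.g. from ORB or SIFT) and uses scikit-learn's KMeans; none of the specifics come from that paper.

```python
import numpy as np
from sklearn.cluster import KMeans


def build_vocabulary(descriptors, n_words=256, seed=0):
    """Learn a visual vocabulary by k-means over local descriptors
    stacked into an (N, D) array."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(descriptors)


def bow_histogram(vocab, image_descriptors):
    """Quantize one image's descriptors against the vocabulary and return
    an L1-normalized word histogram."""
    words = vocab.predict(image_descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)


def localize(query_hist, map_hists):
    """Single-view localization: index of the map image whose histogram
    is most similar to the query (cosine similarity)."""
    sims = map_hists @ query_hist / (
        np.linalg.norm(map_hists, axis=1) * np.linalg.norm(query_hist) + 1e-12)
    return int(np.argmax(sims))
```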


Self-Training PCFG Grammars with Latent Annotations Across Languages

We investigate the effectiveness of self-training PCFG grammars with latent annotations (PCFG-LA) for parsing languages with different amounts of labeled training data. Compared to Charniak's lexicalized parser, the PCFG-LA parser was more effectively adapted to a language for which parsing has been less well developed (i.e., Chinese) and benefited more from self-training. We show for the first t...
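Self-training a parser, as in the entry above, follows a generic loop: train on the labeled treebank, parse unlabeled text, and fold the automatic parses back into training. The sketch below assumes a hypothetical Parser interface with train() and parse() methods and an optional confidence threshold; the actual PCFG-LA setup (and whether it filters parses at all) is not specified here.

```python
def self_train(parser, labeled_trees, unlabeled_sentences,
               rounds=2, confidence_threshold=None):
    """Generic self-training loop for a parser.

    `parser` is a hypothetical object exposing train(list_of_trees) and
    parse(sentence) -> (best_tree, model_score); it stands in for the
    PCFG-LA parser, which is not available here.
    """
    training_set = list(labeled_trees)
    for _ in range(rounds):
        parser.train(training_set)
        for sentence in unlabeled_sentences:
            tree, score = parser.parse(sentence)
            # Optionally keep only confident automatic parses as extra
            # (noisy) training data; without a threshold, keep them all.
            if confidence_threshold is None or score >= confidence_threshold:
                training_set.append(tree)
    parser.train(training_set)
    return parser
```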


Training self‐assessment and task‐selection skills to foster self‐regulated learning: Do trained skills transfer across domains?

Students' ability to accurately self-assess their performance and select a suitable subsequent learning task in response is imperative for effective self-regulated learning. Video modeling examples have proven effective for training self-assessment and task-selection skills, and, importantly, such training fostered self-regulated learning outcomes. It is unclear, however, whether trained skills w...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2021

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-030-87589-3_32